-
Abstract To enumerate people experiencing homelessness in the United States, the federal Department of Housing and Urban Development (HUD) mandates that its designated local jurisdictions regularly conduct a crude census of this population. This Point-in-Time (PIT) body count, typically conducted on a January night by volunteers with flashlights and clipboards, is often followed by interviews with a separate convenience sample. Here, we propose employing a network-based (peer-referral) respondent-driven sampling (RDS) method to generate a representative sample of unsheltered people, accompanied by a novel method to generate a statistical estimate of the number of unsheltered people in the jurisdiction. First, we developed a power analysis for the sample size of our RDS survey to count unsheltered people experiencing homelessness. Then, we conducted three large-scale population-representative samples in King County, WA (Seattle metro) in 2022, 2023, and 2024. We describe the data collection and the application of our new method, comparing the 2020 PIT count (the last visual PIT count performed in King County) with the 2022 and 2024 PIT counts produced by the new method. We conclude with a discussion and future directions. This article is part of a Special Collection on Methods in Social Epidemiology.
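The abstract's population-size estimator is novel and not specified here, but the weighting idea behind respondent-driven sampling can be illustrated with the standard RDS-II (Volz-Heckathorn) estimator, which down-weights well-connected respondents to correct for their higher recruitment probability in a peer-referral chain. A minimal sketch, with invented sample data:

```python
def rds_ii_estimate(sample):
    """RDS-II (Volz-Heckathorn) estimator of a population proportion.

    sample: list of (degree, trait) pairs, where degree is the respondent's
    self-reported network size and trait flags the characteristic of
    interest. Weighting each respondent by 1/degree corrects for the
    higher inclusion probability of well-connected people.
    """
    weights = [1.0 / degree for degree, _ in sample]
    hits = sum(w for w, (_, trait) in zip(weights, sample) if trait)
    return hits / sum(weights)

# Toy illustration with invented data: the two high-degree respondents
# dominate the raw sample, so the naive proportion (0.4) overstates the
# degree-weighted estimate.
sample = [(10, True), (10, True), (2, False), (2, False), (2, False)]
naive = sum(1 for _, trait in sample if trait) / len(sample)
weighted = rds_ii_estimate(sample)   # noticeably below the naive 0.4
```

This is background on standard RDS weighting only, not the authors' count estimator.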
-
The rapidly increasing size of deep-learning models has renewed interest in alternatives to digital-electronic computers as a means to dramatically reduce the energy cost of running state-of-the-art neural networks. Optical matrix-vector multipliers are best suited to performing computations with very large operands, which suggests that large Transformer models could be a good target for them. In this paper, we investigate, through a combination of simulations and experiments on prototype optical hardware, the feasibility and potential energy benefits of running Transformer models on future optical accelerators that perform matrix-vector multiplication. We use simulations, with noise models validated by small-scale optical experiments, to show that optical accelerators for matrix-vector multiplication should be able to accurately run a typical Transformer architecture model for language processing. We demonstrate that optical accelerators can achieve the same (or better) perplexity as digital-electronic processors at 8-bit precision, provided that the optical hardware uses sufficiently many photons per inference, which translates directly to a requirement on optical energy per inference. We studied numerically how the requirement on optical energy per inference changes as a function of the Transformer width $$d$$ and found that the optical energy per multiply-accumulate (MAC) scales approximately as $$\frac{1}{d}$$, giving an asymptotic advantage over digital systems. We also analyze the total system energy costs for optical accelerators running Transformers, including both optical and electronic costs, as a function of model size.
We predict that well-engineered, large-scale optical hardware should be able to achieve a $$100 \times$$ energy-efficiency advantage over current digital-electronic processors in running some of the largest current Transformer models, and if both the models and the optical hardware are scaled to the quadrillion-parameter regime, optical accelerators could have a $$>8,000\times$$ energy-efficiency advantage. Under plausible assumptions about future improvements to electronics and Transformer quantization techniques (5× cheaper memory access, double the digital-to-analog conversion efficiency, and 4-bit precision), we estimate that the energy advantage for optical processors versus electronic processors operating at 300 fJ/MAC could grow to $$>100,000\times$$.
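The asymptotic claim follows from simple arithmetic: if optical energy per MAC falls as $$\frac{1}{d}$$ while digital energy per MAC stays constant, the per-MAC advantage grows linearly in the width $$d$$. A sketch of that argument; the optical prefactor C_OPT is a hypothetical placeholder for illustration, not a measured value from the paper:

```python
E_DIGITAL_PER_MAC = 300e-15   # J/MAC: the electronic baseline quoted above
C_OPT = 3e-12                 # J/MAC at d = 1: hypothetical prefactor

def optical_energy_per_mac(d):
    # Optical energy per MAC scales approximately as 1/d in Transformer width d.
    return C_OPT / d

def energy_advantage(d):
    return E_DIGITAL_PER_MAC / optical_energy_per_mac(d)

# Under this model the advantage is linear in d: doubling the width
# doubles the per-MAC energy advantage.
assert energy_advantage(8192) == 2 * energy_advantage(4096)
```

The crossover width and absolute advantage depend entirely on the prefactor; only the linear-in-$$d$$ growth is taken from the abstract.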
-
Concepts covered in introductory electricity and magnetism such as electric and magnetic field vectors, solenoids, and electromagnetic waves are difficult concepts for students to visualize. Part of this difficulty may be due to the representation of three-dimensional objects on the two-dimensional planes of course textbooks and classroom whiteboards. The use of two-dimensional platforms limits the visualization of phenomena such as the vector field of a point charge or test charges traveling in the three-dimensional space of an electric field. In addition, working in two dimensions may add to students’ difficulties orienting their body correctly to use the right-hand rule when determining the direction of a magnetic field. These difficulties in visualization may limit the conceptual understanding of these fundamental topics. To promote conceptual understanding of electromagnetism we are cyclically developing and researching three spatial computing 3D environments covering electric fields, magnetic fields and electromagnetic waves. Each environment will be developed and tested in both augmented and virtual reality. The first of our environments, the electric field, has been built and tested in augmented reality (AR) with introductory physics students in the Fall 2023 semester. Our study is currently in phase IV of the National Science Foundation’s Design and Development Cycle. Data collected during phase II is being analyzed to support revision to the environment as well as data collection protocols. This article will outline findings from qualitative data gathered during the AR experience as well as during student post interviews following participation in the electric field space. These findings are characterized and then responded to with recommendations for the design team regarding content and testing procedures. 
In what follows, we first present a framework listing current knowledge regarding students' difficulties learning electric fields and how these guided our design of this electric field augmented reality environment. We next present themes that emerged from discussions during the experience as well as the post interviews. We conclude with suggestions to inform our second round of environmental design.
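As background for the right-hand-rule difficulty mentioned above: the rule is the geometric reading of the cross product in F = qv × B, which a 3D environment can render directly instead of asking students to project it onto a whiteboard. A minimal sketch of the algebra the visualization encodes:

```python
def cross(a, b):
    """Right-handed cross product a x b for 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# A positive charge moving along +x through a field along +y feels a
# force along +z: the configuration students act out with their hands.
v = (1.0, 0.0, 0.0)   # velocity direction
B = (0.0, 1.0, 0.0)   # magnetic field direction
F = cross(v, B)       # direction of F = q v x B for q > 0: (0, 0, 1)
```

Swapping the operand order flips the sign, which is exactly the mistake the right-hand rule guards against.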
-
Abstract We conducted an all-sky imaging transient search with the Owens Valley Radio Observatory Long Wavelength Array (OVRO-LWA) data collected during the Perseid meteor shower in 2018. The data collection during the meteor shower was motivated by the search for intrinsic radio emission from meteors below 60 MHz, known as meteor radio afterglows (MRAs). The data collected were calibrated and imaged using the core array to obtain lower angular resolution images of the sky. These images were input to a pre-existing LWA transient search pipeline to search for MRAs as well as cosmic radio transients. This search detected 5 MRAs and did not find any cosmic transients. We further conducted peeling of bright sources, near-field correction, visibility differencing, and higher angular resolution imaging using the full array for these 5 MRAs. These higher angular resolution images were used to study their plasma emission structures and monitor their evolution as a function of frequency and time. With higher angular resolution imaging, we resolved the radio emission size scales to less than 1 km physical size at 100 km heights. The spectral index mapping of one of the long-duration events showed signs of diffusion of plasma within the meteor trails. The unpolarized emission from the resolved radio components suggests resonant transition radiation as the possible radiation mechanism of MRAs.
-
Storm-scale interactions with rough terrain are complex. Terrain has been theorized to impact the strength of low-level mesocyclones. Surface roughness and modifications of the surrounding environment also may impact tornadogenesis or tornado intensity. The Mountainburg, Arkansas EF2 tornado on 13 April 2018 traveled along a path with minor variations in intensity and elevation throughout most of the nearly 19-km (11.8 mi) damage path as the storm moved along a river valley. A detailed damage survey showed that the tornado then made an abrupt ascent of more than 200 m (656 ft) in the last 2 km (1.2 mi) before dissipating. By examining model soundings and conducting a detailed terrain analysis, this study examines what role terrain may have had in channeling the momentum surge and enhancing the low-level vorticity to influence tornadogenesis. Other storm-scale factors are investigated to determine their potential impact on the demise of the tornado. The differential reflectivity column is studied to determine if the updraft was weakening. The relative positions of the tornado and mesocyclone also are examined as the tornado ascended the terrain and dissipated to determine whether the change in elevation impacted the overall strength of the storm and to evaluate whether the storm was undergoing a traditional occlusion cycle. Finally, a large-eddy simulation model is used to explore physical changes in a tornado encountering terrain similar to the Mountainburg, Arkansas, tornado near its demise.
-
Abstract Advanced Quantitative Precipitation Information (AQPI) is a synergistic project that combines observations and models to improve monitoring and forecasts of precipitation, streamflow, and coastal flooding in the San Francisco Bay Area. As an experimental system, AQPI leverages more than a decade of research, innovation, and implementation of a statewide, state-of-the-art network of observations, and development of the next generation of weather and coastal forecast models. AQPI was developed as a prototype in response to requests from the water management community for improved information on precipitation, riverine, and coastal conditions to inform their decision-making processes. Observation of precipitation in the complex Bay Area landscape of California's coastal mountain ranges is known to be a challenging problem. But, with new advanced radar network techniques, AQPI is helping fill an important observational gap for this highly populated and vulnerable metropolitan area. The prototype AQPI system consists of improved weather radar data for precipitation estimation; additional surface measurements of precipitation, streamflow, and soil moisture; and a suite of integrated forecast modeling systems to improve situational awareness about current and future water conditions from sky to sea. Together these tools will help improve emergency preparedness and public response to prevent loss of life and destruction of property during extreme storms accompanied by heavy precipitation and high coastal water levels, especially moisture-laden atmospheric rivers. The Bay Area AQPI system could potentially be replicated in other urban regions in California, the United States, and worldwide.
-
Abstract We present the time lag/delay reconstructor (TLDR), an algorithm for reconstructing velocity delay maps in the maximum a posteriori framework for reverberation mapping. Reverberation mapping is a tomographical method for studying the kinematics and geometry of the broad-line region of active galactic nuclei at high spatial resolution. Leveraging modern image reconstruction techniques, including total variation and compressed sensing, TLDR applies multiple regularization schemes to reconstruct velocity delay maps using the alternating direction method of multipliers. Along with the detailed description of the TLDR algorithm we present test reconstructions from TLDR applied to synthetic reverberation mapping spectra as well as a preliminary reconstruction of the Hβ feature of Arp 151 from the 2008 Lick Active Galactic Nuclei Monitoring Project.
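TLDR's velocity-delay-map reconstructions are far more elaborate than can be shown here, but the ADMM-plus-total-variation machinery it leverages can be illustrated with a minimal 1-D TV denoiser. This is a sketch of the general technique, not the TLDR algorithm; the signal and parameters are invented:

```python
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: the shrinkage step in ADMM's z-update."""
    return [max(abs(a) - t, 0.0) * (1.0 if a >= 0 else -1.0) for a in v]

def diff(x):
    """Forward differences D @ x (the 1-D total-variation operator)."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def diff_t(w, n):
    """Transpose D^T @ w of the forward-difference operator."""
    out = [0.0] * n
    out[0] = -w[0]
    for i in range(1, n - 1):
        out[i] = w[i - 1] - w[i]
    out[n - 1] = w[n - 2]
    return out

def solve_tridiag(lower, diag, upper, b):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(b)
    cp, bp = [0.0] * n, [0.0] * n
    cp[0], bp[0] = upper[0] / diag[0], b[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m
        bp[i] = (b[i] - lower[i] * bp[i - 1]) / m
    x = [0.0] * n
    x[-1] = bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = bp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise(y, lam, rho=1.0, iters=300):
    """Minimize 0.5*||x - y||^2 + lam*||Dx||_1 by ADMM (1-D TV denoising)."""
    n = len(y)
    # The x-update matrix A = I + rho * D^T D is tridiagonal.
    diag = [1.0 + rho] + [1.0 + 2.0 * rho] * (n - 2) + [1.0 + rho]
    lower = [0.0] + [-rho] * (n - 1)
    upper = [-rho] * (n - 1) + [0.0]
    z, u = [0.0] * (n - 1), [0.0] * (n - 1)
    x = list(y)
    for _ in range(iters):
        dzu = diff_t([zi - ui for zi, ui in zip(z, u)], n)
        rhs = [yi + rho * r for yi, r in zip(y, dzu)]
        x = solve_tridiag(lower, diag, upper, rhs)   # x-update (quadratic solve)
        dx = diff(x)
        z = soft_threshold([a + b for a, b in zip(dx, u)], lam / rho)  # z-update
        u = [ui + a - b for ui, a, b in zip(u, dx, z)]                 # dual update
    return x

# A noiseless step: TV keeps the signal piecewise constant but shrinks the jump.
y = [0.0, 0.0, 0.0, 4.0, 4.0, 4.0]
x_hat = tv_denoise(y, lam=0.5)
```

The same split (quadratic data-fit solve, L1 shrinkage, dual update) carries over to the 2-D velocity-delay-map setting, where the data-fit term involves the reverberation transfer equation rather than a simple identity.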